
    The operation of NATO in the light of disaster and crisis management. The intervention in Libya: A case study

    The history of humanity, from the beginning of its existence and irrespective of origin and civilization, is full of references to great catastrophic events, which have been the trigger for major changes and transitions. At times, humans attributed these catastrophic phenomena either to divine wrath or to higher supernatural forces, feeling (and being) powerless to control and/or tame them. During this phase, in which the vast majority of events were of natural origin (natural disasters), the human goal was simply to adapt to and survive the magnitude of their impact. Over the last century, however, as international experience came to include procedures for recording and remedying disasters, it became clear that other types of disasters also occur, whose onset is not necessarily related to natural phenomena but mainly to human activity (man-made disasters). These are events of equally large scale, with consequences as serious as those of natural disasters. As societies evolved, they began to develop mechanisms for reacting to crises and disasters, oriented mainly toward the post-disaster period and aimed at mitigating the consequences. Later, the notion of prevention gradually began to enter the picture. In this context, and considering that such events cannot be bounded in space (or in time), the international community and international organizations (UN, EU, NATO) developed models of common action, always in the light of developments in the global field of disaster management (Hyogo Framework for Action 2005, Sendai Framework for Disaster Risk Reduction 2015). The aim of the present thesis is twofold. First, it explains and analyzes the basic concepts of disaster and crisis management, so as to familiarize the reader with this field and its importance at the international level. Second, it presents NATO's way of acting and its response mechanisms in cases of disasters and crises. Finally, it examines the Alliance's course of action in the Libya intervention (case study) and presents the relevant conclusions and lessons learned, for further study and future utilization.

    Exceeding Conservative Limits: A Consolidated Analysis on Modern Hardware Margins

    Modern large-scale computing systems (data centers, supercomputers, cloud and edge setups, and high-end cyber-physical systems) employ heterogeneous architectures that consist of multicore CPUs, general-purpose many-core GPUs, and programmable FPGAs. The effective utilization of these architectures poses several challenges, among which a primary one is power consumption. Voltage reduction is one of the most efficient methods to reduce the power consumption of a chip. With the rapid adoption of hardware accelerators (i.e., GPUs and FPGAs) in large data centers and other large-scale computing infrastructures, a comprehensive evaluation of the safe voltage reduction levels of each chip can be employed to efficiently reduce total power. We present a survey of recent studies on voltage margin reduction at the system level for modern CPUs, GPUs, and FPGAs. The pessimistic voltage guardbands inserted by the silicon vendors can be exploited in all devices for significant power savings. On average, voltage reduction can reach 12% in multicore CPUs, 20% in manycore GPUs, and 39% in FPGAs. Comment: Accepted for publication in IEEE Transactions on Device and Materials Reliability.
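    As a rough illustration of why these margins matter, dynamic power scales approximately with the square of the supply voltage at a fixed clock frequency. The sketch below is a back-of-the-envelope estimate under that simplified model (it is not taken from the survey and ignores static/leakage power), translating the reported average voltage reductions into approximate dynamic power savings.

```python
# Back-of-the-envelope estimate: dynamic power scales roughly as V^2 at a
# fixed clock frequency (P_dyn ~ C * V^2 * f), so a relative voltage
# reduction r lowers dynamic power by about 1 - (1 - r)^2.
# The reduction figures are the averages reported in the survey; the V^2
# model is a simplification that ignores static/leakage power.

def dynamic_power_savings(voltage_reduction: float) -> float:
    """Fractional dynamic power savings for a relative voltage reduction."""
    return 1.0 - (1.0 - voltage_reduction) ** 2

for device, reduction in [("multicore CPUs", 0.12),
                          ("manycore GPUs", 0.20),
                          ("FPGAs", 0.39)]:
    print(f"{device}: {reduction:.0%} lower Vdd -> "
          f"~{dynamic_power_savings(reduction):.0%} less dynamic power")
```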

    Anatomy of the reliability assessment of modern microprocessors at the microarchitecture level

    The rapid development of semiconductor technologies is continuously increasing the density and complexity of modern microprocessor chips in favor of performance and functionality. This extreme scaling, however, negatively affects the reliable operation of microprocessors, making them more vulnerable to cosmic radiation, latent manufacturing defects, device degradation, and low-voltage operation. As a consequence, modern microprocessor designs suffer from increased error rates and require the adoption of error protection mechanisms. Protection, however, does not come for free: it imposes area, power, and performance overheads that can eventually degrade the efficiency of a design compared to its ideal capabilities. It is important to apply the level of protection that corresponds to the actual requirements of a processor design, and to do so, designers must know what these requirements are. Several techniques have been proposed to evaluate the reliability of a system; they are often applied to models of the actual design using simulators. A reliability evaluation may reveal weak points of the design that require protection, as well as components that can tolerate faults more easily. This information can guide design decisions related to the reliability of a system and thus initiate a re-design iteration; the earlier it is available, the smaller the cost of a re-design cycle. Microarchitecture-level hardware models (performance models) are usually available in the very early design stages of a chip. We have developed GeFIN, a microarchitecture-level toolset that allows reliability evaluation of a system at this early stage. Focusing on the critical parameter of speed, we have developed novel techniques to accelerate the evaluation process without sacrificing the accuracy of the results. These can be used for several different studies, including (but not limited to) protection exploration, microarchitectural design choices, and workload analysis. To provide an additional argument in favor of microarchitecture-level reliability assessment, we have compared, for the first time, the results of evaluation at this level against the final product, both in RTL simulation and on silicon chips. Our findings show that the results of a microarchitecture-level assessment are highly correlated with those of the end product for a particular set of system-level effects, but they also quantify the limitations of the methodology. The purpose of this dissertation is to improve the value of microarchitecture-level reliability assessment by providing new, advanced, and comprehensive methodologies, insights into the accuracy of the method, and a demonstration of how it can be used to address existing problems in industry and academia.
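    To make the methodology concrete, the following is a minimal sketch of the kind of statistical fault-injection campaign that such microarchitecture-level tools automate; the structure names, outcome labels, and the run_workload hook are illustrative assumptions, not the actual GeFIN interface.

```python
# Minimal sketch of a statistical fault-injection campaign at the
# microarchitecture level: each experiment flips one random bit of a modeled
# hardware structure at a random cycle, re-runs the workload, and classifies
# the outcome. Structure sizes, outcome labels and run_workload() are
# illustrative placeholders, not the GeFIN API.
import random
from collections import Counter

STRUCTURES = {"register_file": 32 * 64, "L1D_tag_array": 512 * 20}  # bits (example sizes)

def run_workload(structure: str, bit: int, cycle: int) -> str:
    """Placeholder for a simulator run with a single injected bit flip.
    A real injector would return e.g. 'masked', 'SDC' or 'crash'."""
    return random.choice(["masked", "masked", "masked", "SDC", "crash"])

def campaign(structure: str, experiments: int, sim_cycles: int) -> Counter:
    outcomes = Counter()
    for _ in range(experiments):
        bit = random.randrange(STRUCTURES[structure])  # random fault location
        cycle = random.randrange(sim_cycles)           # random injection time
        outcomes[run_workload(structure, bit, cycle)] += 1
    return outcomes

results = campaign("register_file", experiments=1000, sim_cycles=10**7)
total = sum(results.values())
for outcome, count in results.items():
    print(f"{outcome}: {count / total:.1%}")
```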

    Adaptive Voltage/Frequency Scaling and Core Allocation for Balanced Energy and Performance on Multicore CPUs

    Energy efficiency is a known major concern for computing system designers. Significant effort is devoted to the power optimization of modern systems, especially in large-scale installations such as data centers, in which both high performance and energy efficiency are important. Power optimization can be achieved through different approaches, several of which focus on adaptive voltage regulation. In this paper, we present a comprehensive exploration of how two server-grade systems behave in different frequency and core allocation configurations beyond nominal voltage operation. Our analysis, which is built on top of two state-of-the-art ARMv8 microprocessor chips (Applied Micro's X-Gene 2 and X-Gene 3), aims (1) to identify the best performance-per-watt operating points when the servers run at various voltage/frequency combinations, (2) to reveal how and why the different core allocation options on the available cores of the microprocessor affect energy consumption, and (3) to enhance the default Linux scheduler to take task allocation decisions for balanced performance and energy efficiency. Our findings, obtained on actual server hardware, have been integrated into a lightweight online monitoring daemon which decides the optimal combination of voltage, core allocation, and clock frequency to achieve higher energy efficiency. Our approach reduces energy on average by 25.2% on X-Gene 2 and 22.3% on X-Gene 3, with a minimal performance penalty of 3.2% on X-Gene 2 and 2.5% on X-Gene 3, compared to the default system configuration.
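    A hedged sketch of the kind of decision such a monitoring daemon has to make: given measured (performance, power) samples for candidate voltage/frequency/core-allocation configurations, pick the most energy-efficient one subject to a performance-loss bound. The sample values and the 5% bound are illustrative placeholders, not the paper's implementation.

```python
# Illustrative selection logic for an online monitor choosing among
# voltage/frequency/core-allocation operating points. The sample data and
# the 5% performance-loss bound are made-up placeholders; a real daemon
# would read live performance counters and power sensors.
from dataclasses import dataclass

@dataclass
class OperatingPoint:
    label: str          # e.g. "reduced V / max f / all cores"
    throughput: float   # measured work per second (relative units)
    power_watts: float  # measured package power

def pick_best(points, baseline, max_perf_loss=0.05):
    """Return the point with the best perf/watt whose slowdown versus the
    baseline stays within max_perf_loss; fall back to the baseline."""
    eligible = [p for p in points
                if p.throughput >= baseline.throughput * (1 - max_perf_loss)]
    if not eligible:
        return baseline
    return max(eligible, key=lambda p: p.throughput / p.power_watts)

baseline = OperatingPoint("nominal V / max f / all cores", 100.0, 120.0)
candidates = [
    baseline,
    OperatingPoint("reduced V / max f / all cores", 98.0, 90.0),
    OperatingPoint("reduced V / lower f / packed cores", 93.0, 70.0),
]
best = pick_best(candidates, baseline)
print(f"Selected: {best.label} ({best.throughput / best.power_watts:.2f} perf/W)")
```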

    HealthLog Monitor: Errors, Symptoms and Reactions Consolidated


    Assessing the Effects of Low Voltage in Branch Prediction Units

    Branch prediction units (BPUs) are key performance components in modern microprocessors, as they are widely used to address control hazards and minimize misprediction stalls. The continuous demand for high performance has led designers to integrate highly sophisticated predictors with complex prediction algorithms and large storage requirements. As a result, BPUs in modern microprocessors consume large amounts of power. When a system is under a limited power budget, however, critical decisions are required to reach an equilibrium between the BPU and the rest of the microprocessor. In this work, we present a comprehensive analysis of the effects of low-voltage operation on branch prediction units. We propose a design with a separate voltage domain for the BPU, which exploits the speculative (and thus self-correcting) nature of branch prediction to reduce power without affecting functional correctness. Our study explores how several branch predictor implementations behave when aggressively undervolted, the performance impact of the branch target buffer (BTB), and the cases in which it is more efficient to reduce the branch predictor and BTB sizes instead of undervolting. We also show, through a realistic protection implementation, that protecting the BPU SRAM arrays has limited potential to further increase the energy savings. Our results show that BPU undervolting can yield power savings of up to 69%, while microprocessor energy savings can reach 12%, before the penalty of performance degradation outweighs the benefits of low voltage. Neither smaller predictor sizes nor protection mechanisms can further improve energy consumption.
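    The point at which performance degradation outweighs the benefits can be illustrated with a simple energy model: with E = P × T, a power reduction s and an execution-time increase d from extra mispredictions give a net energy ratio of (1 − s)(1 + d). The sketch below is an illustrative model, not the paper's evaluation flow; only the 12% figure comes from the abstract, and it is reported there as an energy saving, used here as a power reduction purely for illustration.

```python
# Simple energy trade-off model for undervolting a component:
# E = P * T, so if total power drops by fraction `power_savings` while
# execution time grows by fraction `slowdown` (extra misprediction stalls),
# the new-to-old energy ratio is (1 - power_savings) * (1 + slowdown).
# The 12% figure is borrowed from the abstract (reported there as an energy
# saving) and used here as a power reduction for illustration only.

def energy_ratio(power_savings: float, slowdown: float) -> float:
    return (1.0 - power_savings) * (1.0 + slowdown)

system_power_savings = 0.12
for slowdown in [0.00, 0.05, 0.10, 0.1364, 0.15]:  # break-even near s/(1-s)
    r = energy_ratio(system_power_savings, slowdown)
    verdict = "saves energy" if r < 1.0 else "no longer worth it"
    print(f"slowdown {slowdown:6.2%}: energy ratio {r:.3f} ({verdict})")
```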

    Differential Fault Injection on Microarchitectural Simulators

    Fault injection on microarchitectural structures modeled in performance simulators is an effective method for assessing microprocessor reliability in early design stages. Compared to lower-level fault injection approaches, it is orders of magnitude faster and allows execution of large portions of workloads to study the effect of faults on the final program output. Moreover, for many important hardware components it delivers accurate reliability estimates compared to analytical methods, which are fast but are known to significantly over-estimate a structure's vulnerability to faults. This paper investigates the effectiveness of microarchitectural fault injection for x86 and ARM microprocessors in a differential way: by developing and comparing two fault injection frameworks on top of the most popular performance simulators, MARSS and Gem5. The injectors, called MaFIN and GeFIN (for MARSS-based and Gem5-based Fault Injector, respectively), are designed for accurate reliability studies and deliver several contributions, among which: (a) reliability studies for a wide set of fault models on major hardware structures (of different sizes and organizations), (b) a study of the reliability sensitivity of microarchitectural structures for the same ISA (x86) implemented on two different simulators, and (c) a study of the reliability of workloads and microarchitectures for the two most popular ISAs (ARM vs. x86). For the workloads of our experimental study, we analyze the common trends observed in the CPU reliability assessments produced by the two injectors. We also explain the sources of difference when the tools provide diverging reliability reports. Both the common trends and the differences are attributed to fundamental implementation aspects of the simulators and are supported by benchmark runtime statistics. The insights of our analysis can guide the selection of the most appropriate tool for hardware reliability studies (and thus decision-making about protection mechanisms) on certain microarchitectures for the popular x86 and ARM ISAs.
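    A minimal sketch of the "differential" part of such a study: given outcome counts from two independent fault-injection campaigns over the same structure and workload, compute per-outcome rates and their divergence. The counts below are invented placeholders, not results from MaFIN or GeFIN.

```python
# Differential comparison of two fault-injection campaigns on the same
# hardware structure and workload: convert raw outcome counts into rates
# and report the per-outcome difference. The numbers are invented
# placeholders, not actual MaFIN/GeFIN results.

def rates(outcomes: dict) -> dict:
    total = sum(outcomes.values())
    return {k: v / total for k, v in outcomes.items()}

mafin_counts = {"masked": 820, "SDC": 110, "crash": 70}   # placeholder counts
gefin_counts = {"masked": 790, "SDC": 135, "crash": 75}   # placeholder counts

mafin, gefin = rates(mafin_counts), rates(gefin_counts)
for outcome in sorted(set(mafin) | set(gefin)):
    a, b = mafin.get(outcome, 0.0), gefin.get(outcome, 0.0)
    print(f"{outcome:>6}: MaFIN {a:.1%} vs GeFIN {b:.1%} (diff {abs(a - b):.1%})")
```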

    Analysis and Characterization of Ultra Low Power Branch Predictors

    Branch predictors are widely used to boost the performance of microprocessors. However, this comes at the expense of power, because accurate branch prediction requires simultaneous access to several large tables on every fetch. The consumed power can be drastically reduced by operating the predictor at sub-nominal voltage levels (undervolting) using a separate voltage domain. Faulty behavior resulting from undervolting the predictor arrays impacts performance through additional mispredictions but does not compromise system reliability or functional correctness. In this work, we explore how two well-established branch predictors (Tournament and L-TAGE) behave when aggressively undervolted below the minimum fault-free supply voltage (Vmin). Our results, based on fault injection and performance simulations, show that both predictors reduce their power consumption by more than 63% and can deliver peak energy savings of 6.4% in the overall system, without observable performance degradation. However, energy consumption can increase for both predictors due to extra mispredictions if undervolting becomes too aggressive.
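    The relation between predictor-local power savings and system-level energy savings can be illustrated with simple arithmetic: if the predictor draws a fraction of total power and its own power drops by some amount, total power drops by roughly the product of the two. The calculation below uses the abstract's figures (more than 63% predictor power reduction, peak 6.4% system energy savings at unchanged performance) as a rough consistency check; the inferred ~10% predictor power share is an illustrative estimate, not data from the paper.

```python
# Rough consistency check: if the branch predictor consumes a fraction
# `share` of total system power and undervolting cuts its own power by
# `local_savings`, then system power (and, with unchanged runtime, energy)
# drops by about share * local_savings. The two input figures come from the
# abstract; the inferred share is an illustrative estimate.
local_savings = 0.63    # >63% predictor power reduction (abstract)
system_savings = 0.064  # peak 6.4% system energy savings (abstract)

implied_share = system_savings / local_savings
print(f"Implied predictor share of system power: ~{implied_share:.0%}")

# Forward direction: given an assumed share, estimate system-level savings.
for share in (0.05, 0.10, 0.15):
    print(f"share {share:.0%} -> system savings ~{share * local_savings:.1%}")
```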